http://blog.csdn.net/pipisorry/article/details/51340838
The difference between 'hadoop dfs' and 'hadoop fs'
While exploring HDFS, I came across these two syntaxes for querying HDFS:
> hadoop dfs
> hadoop fs
Why do we have two different syntaxes for a common purpose? Why are there two command flags for the same feature?
hadoop fs: has the widest scope; it can operate on any file system.
hadoop dfs and hdfs dfs: can only operate on HDFS-related file systems (including operations that touch the local FS). The former has been deprecated, so the latter is generally used.
The following is quoted from StackOverflow.
Following are the three commands, which appear the same but have minute differences: hadoop fs, hadoop dfs, and hdfs dfs.
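For instance, all three invocations below perform the same listing against an HDFS-backed cluster; this quick check assumes a running cluster, and on recent releases the middle form prints a deprecation warning:
hadoop fs -ls /
hadoop dfs -ls /   # DEPRECATED: use hdfs dfs instead
hdfs dfs -ls /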
Leaving safe mode:
bin/hadoop dfsadmin -safemode leave
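Before forcing the namenode out of safe mode, it is usually worth checking its current state first; a minimal sketch using the same dfsadmin tool:
bin/hdfs dfsadmin -safemode get     # prints whether safe mode is ON or OFF
bin/hdfs dfsadmin -safemode enter   # re-enter safe mode manually if needed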
In hadoop 2.7.1 a relative path does not work out of the box; it seems the directory has to be created with an absolute path. bin/hdfs dfs -mkdir input fails with the hint "ls: 'input': No such file or directory" (environment: hadoop 2.7 on 64-bit CentOS).
The first step must be replaced with bin/hdfs dfs and an absolute path.
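The likely reason is that relative HDFS paths resolve against /user/<current-user>, which does not exist on a fresh cluster. A sketch of the usual fix (the user name here is a placeholder):
bin/hdfs dfs -mkdir -p /user/$USER   # create the HDFS home directory first
bin/hdfs dfs -mkdir input            # the relative path now resolves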
The latest stable version, hadoop 2.2.0, was deployed and installed, and a fuse-dfs compilation tutorial was found online, but it ultimately failed; the cause is unknown. Error description: Transport endpoint is not connected. Hadoop 1.2.1 was then installed and deployed, and the test succeeded. The record is as follows:
Use root to complete the following operations:
1. Install the dependency packages
apt-get install autoconf automake
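Assuming the fuse-dfs build under hadoop 1.2.1 succeeded, mounting typically uses the wrapper script shipped in contrib/fuse-dfs; the mount point and namenode address below are placeholders:
mkdir -p /mnt/hdfs
./fuse_dfs_wrapper.sh dfs://namenode:9000 /mnt/hdfs
ls /mnt/hdfs   # HDFS is now browsable as a local directory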
This write-up is fairly verbose; if you are eager to find the answer, look directly at the bold parts.
(PS: What is written here is all based on the official 2.5.2 documentation, plus the problems I encountered while following it.)
When executing a MapReduce job locally, you may encounter a "No such file or directory" problem. Follow the steps in the official documentation:
1. Format the namenode
bin/hdfs namenode -format
2. Start the namenode and datanode
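In the 2.x single-node guide this second step is done with the start-dfs.sh script; a minimal sketch, run from the Hadoop install directory:
sbin/start-dfs.sh
jps   # should now list NameNode, DataNode, and SecondaryNameNode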
Add a hadoop group
sudo addgroup hadoop
Add the current user larry to the hadoop group
sudo usermod -a -G hadoop larry
Add the hadoop group to sudoers
sudo gedit /etc/sudoers
Add hadoop ALL=(ALL) ALL after root ALL=(ALL) ALL
Modify the owner and permissions of the Hadoop directory
sudo chown -R larry:hadoop /home/larry/hadoop
sudo chmod -R 755 /home/larry/hadoop
Modify permissions on HDFS
sudo bin/hadoop dfs -chmod -R 755 /
sudo bin/hadoop dfs -ls /
HDFS common commands:
Note: the following commands are executed from the bin directory of the Spark installation directory. In the paths below, src is a file path and dist is a folder.
1. -help [cmd]: show help for a command
./hdfs dfs -help ls
2. -ls(r): list all files in the given directory; with -R, recurse through sub-directories layer by layer
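For example, on a hypothetical /user directory (older releases spell the recursive form -lsr; newer ones use -ls -R):
./hdfs dfs -ls /user        # immediate children only
./hdfs dfs -ls -R /user     # recurse layer by layer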
HDFS only provides a heterogeneous storage structure and does not itself know the performance of the storage media; HDFS provides users with an API to control which media a directory or file is written to; and HDFS provides administrators with tools to limit each user's available share of each medium. The current level of completeness is low. Phase 1: the datanode supports heterogeneous storage media (
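In later releases this user-facing API surfaced as the storage policy commands; a sketch using a hypothetical /hot-data path:
hdfs storagepolicies -listPolicies
hdfs storagepolicies -setStoragePolicy -path /hot-data -policy ALL_SSD
hdfs storagepolicies -getStoragePolicy -path /hot-data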
Common HDFS file operation commands and precautions
The HDFS file system provides a considerable number of shell commands, which greatly helps programmers and system administrators view and modify files on HDFS. Furthermore, for a directory, the x permission indicates that its children can be accessed from that directory. Unlike the POSIX model, HDFS has no sticky, setuid, or setgid bits.
HDFS is designed to process massive amounts of data; that is, it can store very large files (TB-scale) on it. HDFS splits these files into blocks and stores them on different datanodes.
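The block placement of a given file can be inspected with fsck; the path here is a placeholder:
bin/hdfs fsck /user/sunlightcs/test.txt -files -blocks -locations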
From: http://www.2cto.com/database/201303/198460.html
Hadoop HDFS common commands:
hadoop fs: view all commands supported by Hadoop HDFS
hadoop fs -ls: list directory and file information
hadoop fs -lsr: recursively list directories, subdirectories, and file information
hadoop fs -put test.txt /user/sunlightcs: copy test.txt from the local file system to the /user/sunlightcs directory of HDFS
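A quick round trip with these commands, assuming a local test.txt (hypothetical file):
hadoop fs -put test.txt /user/sunlightcs
hadoop fs -ls /user/sunlightcs
hadoop fs -get /user/sunlightcs/test.txt ./test-copy.txt   # copy back out of HDFS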
…so HDFS has a high degree of fault tolerance.
3. High data throughput: HDFS uses a "write once, read many" simple data consistency model. In HDFS, once a file has been created, written, and closed, it generally does not need to be modified; such a simple consistency model helps improve throughput.
4. Streaming data access: HDFS is aimed at large-scale data processing,
[cmd ...]: management commands for HDFS
$ bin/hdfs dfsadmin
Usage: hdfs dfsadmin
Note: Administrative commands can only be run as the HDFS superuser.
    [-report [-live] [-dead] [-decommissioning]]
    [-safemode enter | leave | get | wait]
    [-saveNamespace]
    [-rollEdits]
    [-restoreFailedStorage true|false|check]
    ...
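As the note above says, these must run as the HDFS superuser; a minimal sketch of two common subcommands:
bin/hdfs dfsadmin -report -live     # capacity and status of live datanodes
bin/hdfs dfsadmin -safemode get     # current safe mode state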
Create a directory
hadoop dfs -mkdir /home
Upload a file or directory to HDFS
hadoop dfs -put hello /
hadoop dfs -put hellodir/ /
View a directory
hadoop dfs -ls /
Create an empty file
hadoop dfs -touchz /361way
Delete a file
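The original text is cut off at "Delete a file"; in lists like this the command that typically follows is -rm (an assumption, since the source is truncated):
hadoop dfs -rm /361way        # delete a file
hadoop dfs -rmr /hellodir     # old-style recursive delete of a directory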